Abstract:
Progress has been witnessed in single-image super-resolution where the low-resolution images are simulated by bicubic downsampling. However, existing super-resolution methods do not work well on the complex image degradations found in the wild, such as downsampling, blurring, noise, and geometric deformation. Inspired by a persistent memory network that has proven effective in image restoration, we implement the core idea of human memory in a deep residual convolutional neural network. Two types of memory blocks are designed for the NTIRE 2018 challenge. We embed them in the framework of the enhanced deep super-resolution network (EDSR), the NTIRE 2017 champion method, replacing its residual blocks. The first type of memory block is built from residual modules: one memory block contains four residual modules, each with four residual blocks, followed by a gate unit that adaptively selects which features to store. The second type of memory block is a residual dilated convolutional block, which contains seven dilated convolution layers linked to a gate unit. The two proposed models not only improve super-resolution performance but also mitigate degradation from noise and blurring. Experimental results on the DIV2K dataset demonstrate that our models achieve better performance than EDSR.
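The first memory-block design above (residual modules feeding a gate unit) can be loosely sketched in NumPy. This is a hypothetical toy version, not the paper's implementation: channel-mixing matmuls stand in for the actual convolutions, and the gate is a 1×1-conv-style projection over the concatenated stored feature maps.

```python
import numpy as np

def residual_module(x, w):
    # Simplified residual module: one channel-mixing matmul (standing in
    # for the 3x3 convs) with ReLU, plus the identity skip connection.
    y = np.einsum('oc,chw->ohw', w, x)
    return x + np.maximum(y, 0.0)

def gate_unit(features, w_gate):
    # Gate unit: concatenate the stored feature maps along channels and
    # project back to C channels with a learned 1x1-conv-style matrix,
    # adaptively weighting which stored features pass through.
    stacked = np.concatenate(features, axis=0)        # (N*C, H, W)
    return np.einsum('oc,chw->ohw', w_gate, stacked)  # (C, H, W)

def memory_block(x, conv_weights, w_gate):
    # Chain the residual modules and feed all intermediate outputs,
    # plus the block input, to the gate.
    outputs = [x]
    h = x
    for w in conv_weights:
        h = residual_module(h, w)
        outputs.append(h)
    return gate_unit(outputs, w_gate)

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
x = rng.standard_normal((C, H, W))
conv_ws = [0.1 * rng.standard_normal((C, C)) for _ in range(4)]
w_gate = 0.1 * rng.standard_normal((C, 5 * C))  # gates 4 module outputs + input
y = memory_block(x, conv_ws, w_gate)
print(y.shape)  # same channel count and spatial size as the input
```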
Abstract:
A reference oscillator utilizing a 60 MHz, MEMS-based, wine-glass disk vibrating micromechanical resonator with a Q of 48,000 and sufficient power-handling capability to achieve a far-from-carrier phase noise of -130 dBc/Hz is demonstrated. When divided down to 10 MHz, this corresponds to an effective level of -145 dBc/Hz.
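The divided-down figure follows from the standard rule that ideal frequency division by N lowers phase noise by 20·log10(N) dB, which is where the quoted -145 dBc/Hz comes from. A quick check:

```python
import math

f_osc = 60e6        # resonator frequency, Hz
f_ref = 10e6        # divided-down reference frequency, Hz
pn_at_osc = -130.0  # far-from-carrier phase noise, dBc/Hz

# Ideal division by N = f_osc / f_ref improves phase noise by 20*log10(N) dB.
improvement_db = 20.0 * math.log10(f_osc / f_ref)
pn_at_ref = pn_at_osc - improvement_db
print(round(improvement_db, 2), round(pn_at_ref, 1))  # → 15.56 -145.6
```

The exact value is about -145.6 dBc/Hz, consistent with the quoted effective level of -145 dBc/Hz.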
Abstract:
This paper reviews the NTIRE 2022 challenge on efficient single-image super-resolution, with a focus on the proposed solutions and results. The task was to super-resolve an input image with a magnification factor of ×4 based on pairs of low- and corresponding high-resolution images. The aim was to design a network for single-image super-resolution that improved efficiency, measured by several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while maintaining a PSNR of at least 29.00 dB on the DIV2K validation set. IMDN is set as the baseline for efficiency measurement. The challenge had three tracks: the main track (runtime), sub-track one (model complexity), and sub-track two (overall performance). In the main track, the practical runtime performance of the submissions was evaluated, and teams were ranked directly by the absolute value of the average runtime on the validation and test sets. In sub-track one, the number of parameters and FLOPs were considered, and the individual rankings on the two metrics were summed to determine the final ranking. In sub-track two, all five metrics mentioned above (runtime, parameter count, FLOPs, activations, and memory consumption) were considered; as in sub-track one, the rankings on the five metrics were summed to determine the final ranking. The challenge had 303 registered participants, and 43 teams made valid submissions. Together they gauge the state of the art in efficient single-image super-resolution.
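The sum-of-ranks aggregation used in the sub-tracks can be illustrated with a small sketch. The team names and metric values below are hypothetical, not challenge data:

```python
def sum_of_ranks(teams):
    # teams: {name: [metric1, metric2, ...]}, lower is better for each metric.
    # Rank the teams per metric, sum each team's ranks across metrics,
    # and order by the total (lowest total wins).
    names = list(teams)
    n_metrics = len(next(iter(teams.values())))
    totals = {name: 0 for name in names}
    for m in range(n_metrics):
        ordered = sorted(names, key=lambda n: teams[n][m])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted(names, key=lambda n: totals[n])

# Hypothetical entries: [parameters in M, FLOPs in G]
teams = {"A": [0.7, 75.0], "B": [0.9, 50.0], "C": [1.1, 60.0]}
print(sum_of_ranks(teams))  # → ['B', 'A', 'C']
```

Team A leads on parameters but trails on FLOPs, so summing the per-metric ranks places B first overall.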
Abstract:
Deep neural networks with a massive number of layers have made remarkable breakthroughs in single-image super-resolution (SR), but at the cost of computational complexity and memory storage. To address this problem, we focus on lightweight models for fast and accurate image SR. Because the residual block (RB) is used so frequently in SR models, we pursue an economical structure that adaptively combines RBs. Drawing lessons from the lattice filter bank, we design the lattice block (LB), in which two butterfly structures are applied to combine two RBs. The LB can realize various linear combinations of the two RBs; each instance depends on combination coefficients determined by an attention mechanism. The LB yields a lightweight SR model, reducing the parameter count by about half while keeping similar SR performance. Moreover, we propose a lightweight SR model, LatticeNet, which uses a series connection of LBs with backward feature fusion. Extensive experiments demonstrate that our proposal achieves superior accuracy on four benchmark datasets against other state-of-the-art methods, while maintaining relatively low computation and memory requirements.
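A loose NumPy sketch of the butterfly idea, under the assumption that each butterfly cross-couples two branches with per-channel combination coefficients; here fixed random vectors stand in for the attention mechanism and channel-mixing matmuls stand in for the RB convolutions. This is illustrative only, not LatticeNet's actual implementation:

```python
import numpy as np

def residual_block(x, w):
    # Stand-in residual block: channel-mixing matmul + ReLU + skip.
    y = np.einsum('oc,chw->ohw', w, x)
    return x + np.maximum(y, 0.0)

def butterfly(upper, lower, a, b):
    # One butterfly (lattice-filter style): each branch receives a copy
    # of the other, scaled by per-channel coefficients a, b.
    new_upper = upper + b[:, None, None] * lower
    new_lower = lower + a[:, None, None] * upper
    return new_upper, new_lower

def lattice_block(x, w1, w2, a1, b1, a2, b2):
    # Two butterflies wrapped around two residual blocks, so the block
    # realizes a family of linear combinations of the RB outputs that
    # depends on the coefficient vectors.
    u, l = butterfly(x, residual_block(x, w1), a1, b1)
    u, l = butterfly(u, residual_block(l, w2), a2, b2)
    return u + l

rng = np.random.default_rng(1)
C, H, W = 4, 8, 8
x = rng.standard_normal((C, H, W))
w1, w2 = (0.1 * rng.standard_normal((C, C)) for _ in range(2))
a1, b1, a2, b2 = (rng.random(C) for _ in range(4))
y = lattice_block(x, w1, w2, a1, b1, a2, b2)
print(y.shape)  # output keeps the input's shape
```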